    A Surface-based In-House Network Medium for Power, Communication and Interaction

    Recent advances in communication and signal processing methodologies have paved the way for high-speed home-network Power Line Communication (PLC) systems. Powerline communication and control are becoming attractive as a cost-effective and rapid mechanism for delivering communication and control services, which raises the question of the best mix of hardware and software to support infrastructure development for particular PLC applications. Integrating appliances in the home through a wired network often proves to be impractical: routing cables is usually difficult, changing the network structure afterwards even more so, and portable devices can only be connected at fixed connection points. Wireless networks aren't the answer either: batteries have to be regularly recharged or replaced, and what they add to a device's size and weight may be disproportionate for smaller appliances. In Pin&Play, we explore a design space between typical wired and wireless networks, investigating the use of surfaces to network objects that are attached to them. This article gives an overview of the network model and describes functioning prototypes that were built as a proof of concept. The first phase of the development has already been demonstrated in appropriate conferences and publications. [1] Our intention is to introduce this work to the powerline community as the research enters phase II of the Pin&Play architecture, in which we investigate, develop prototype systems, and conduct studies in two concrete application areas. The first area is user-centric and concerned with support for collaborative work on large surfaces. The second area is focused on exhibition spaces and trade fairs, and concerned with combining physical media such as movable walls with digital infrastructure for the fast deployment of engaging installations. In this paper we describe the functionality of the Pin&Play architecture and introduce the second phase together with future plans. Figure 1 shows the technical approach: a surface with a simple layered structure, and pushpin connectors (dual-pin or coaxial).

    Gaze+touch vs. touch: what’s the trade-off when using gaze to extend touch to remote displays?

    Direct touch input is employed on many devices, but it is inherently restricted to displays that are reachable by the user. Gaze input as a mediator can extend touch to remote displays - using gaze for remote selection, and touch for local manipulation - but at what cost and benefit? In this paper, we investigate the potential trade-off with four experiments that empirically compare remote Gaze+touch to standard touch. Our experiments investigate dragging, rotation, and scaling tasks. Results indicate that Gaze+touch is, compared to touch, (1) equally fast and more accurate for rotation and scaling, (2) slower and less accurate for dragging, and (3) able to select smaller targets. Our participants confirm this trend, and are positive about the relaxed finger placement of Gaze+touch. Our experiments provide detailed performance characteristics to consider for the design of Gaze+touch interaction on remote displays. We further discuss insights into strengths and drawbacks in contrast to direct touch.
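
    To make the division of labour concrete, here is a minimal sketch of how a Gaze+touch input layer might dispatch events: gaze performs the remote selection and touch performs the local manipulation. The class and handler names (GazeTouchSurface, on_gaze, on_touch_drag) and the target methods contains and move_by are illustrative assumptions, not the system described in the paper.

        # Hypothetical sketch: gaze selects the remote target, touch
        # manipulates it locally. All names are assumptions.
        class GazeTouchSurface:
            def __init__(self, targets):
                self.targets = targets   # objects on the remote display
                self.active = None       # current gaze-selected object

            def on_gaze(self, gaze_xy):
                # Remote selection: the object under the gaze point
                # becomes the target of subsequent touch input.
                hit = next((t for t in self.targets if t.contains(gaze_xy)), None)
                if hit is not None:
                    self.active = hit

            def on_touch_drag(self, dx, dy):
                # Local manipulation: relative touch gestures apply to the
                # gaze-selected object, wherever the fingers land.
                if self.active is not None:
                    self.active.move_by(dx, dy)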

    AmbiGaze: direct control of ambient devices by gaze

    Eye tracking offers many opportunities for direct device control in smart environments, but issues such as the need for calibration and the Midas touch problem make it impractical. In this paper, we propose AmbiGaze, a smart environment that employs the animation of targets to provide users with direct control of devices by gaze alone, through smooth pursuit tracking. We propose a design space of means of exposing functionality through movement and illustrate the concept through four prototypes. We evaluated the system in a user study and found that AmbiGaze enables robust gaze-only interaction with many devices, from multiple positions in the environment, in a spontaneous and comfortable manner.
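
    A minimal sketch of the smooth pursuit matching such a system relies on: the gaze trajectory is correlated against each device's animated target over a sliding window, and a device is activated only when the correlation is high, which avoids the Midas touch problem without calibration. The names, window handling, and 0.8 threshold are assumptions for illustration; targets are assumed to move on both axes (e.g. orbits).

        # Sketch of pursuit-based selection (assumed details, Python 3.10+).
        from dataclasses import dataclass
        from statistics import correlation

        @dataclass
        class Target:
            name: str
            xs: list[float]   # target x positions over the matching window
            ys: list[float]   # target y positions at the same timestamps

        def pursuit_score(gaze_x, gaze_y, target):
            # Gaze that smoothly follows the target correlates near 1
            # on both axes; take the weaker axis as the score.
            return min(correlation(gaze_x, target.xs),
                       correlation(gaze_y, target.ys))

        def select_pursuit_target(gaze_x, gaze_y, targets, threshold=0.8):
            # Activate the device whose animated target best matches the
            # gaze, but only above the threshold (no accidental triggers).
            best = max(targets, key=lambda t: pursuit_score(gaze_x, gaze_y, t))
            return best if pursuit_score(gaze_x, gaze_y, best) >= threshold else None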

    TraceMatch: a computer vision technique for user input by tracing of animated controls

    Recent works have explored the concept of movement correlation interfaces, in which moving objects can be selected by matching the movement of the input device to that of the desired object. Previous techniques relied on a single modality (e.g. gaze or mid-air gestures) and specific hardware to issue commands. TraceMatch is a computer vision technique that enables input by movement correlation while abstracting from any particular input modality. The technique relies only on a conventional webcam to enable users to produce matching gestures with any given body part, even whilst holding objects. We describe an implementation of the technique for the acquisition of orbiting targets, evaluate algorithm performance for different target sizes and frequencies, and demonstrate use of the technique for remote control of graphical as well as physical objects with different body parts.
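
    The following sketch shows the general shape of such a webcam-based movement correlation pipeline: feature points are tracked with optical flow, and any feature whose trajectory correlates with the known path of an orbiting control counts as a matching gesture. The OpenCV calls are standard; the window length, orbit parameters, and 0.85 threshold are assumptions rather than the paper's actual parameters.

        # Assumed sketch of webcam movement matching with OpenCV + NumPy.
        import math
        import numpy as np
        import cv2

        WINDOW = 60        # samples (~2 s at 30 fps), assumed
        THRESHOLD = 0.85   # minimum per-axis correlation, assumed

        def orbit_position(t, cx=0.0, cy=0.0, r=1.0, hz=0.5, fps=30.0):
            # Known position of the animated on-screen control at sample t.
            phase = 2 * math.pi * hz * t / fps
            return cx + r * math.cos(phase), cy + r * math.sin(phase)

        def matches(track, target):
            # Pearson correlation per axis between a tracked feature's
            # path and the orbiting control's path over the same window.
            tx, ty = np.asarray(track, dtype=float).T
            gx, gy = np.asarray(target, dtype=float).T
            rx = np.corrcoef(tx, gx)[0, 1]
            ry = np.corrcoef(ty, gy)[0, 1]
            return bool(rx >= THRESHOLD and ry >= THRESHOLD)

        cap = cv2.VideoCapture(0)
        ok, frame = cap.read()
        prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev, maxCorners=50,
                                      qualityLevel=0.01, minDistance=10)
        history = [[] for _ in range(len(pts))]
        target_path = []

        for t in range(WINDOW):
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
            for i, (p, s) in enumerate(zip(new_pts, status)):
                if s:   # keep only features tracked in this frame
                    history[i].append(tuple(p.ravel()))
            target_path.append(orbit_position(t))
            prev, pts = gray, new_pts

        # Features tracked for the full window whose motion matches the
        # orbit count as an activation gesture (any body part will do).
        hits = [h for h in history if len(h) == WINDOW and matches(h, target_path)]
        print("activated" if hits else "no match")
        cap.release()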

    Look together: using gaze for assisting co-located collaborative search

    Gaze information provides an indication of a user's focus, which complements remote collaboration tasks, as distant users can see their partner's focus. In this paper, we apply gaze to co-located collaboration, where users' gaze locations are presented on the same display to help collaboration between partners. We integrated various types of gaze indicators into the user interface of a collaborative search system, and we conducted two user studies to understand how gaze enhances coordination and communication between co-located users. Our results show that gaze indeed enhances co-located collaboration, but with a trade-off between the visibility of gaze indicators and user distraction. Users acknowledged that seeing gaze indicators eases communication, because it lets them be aware of their partner's interests and attention. However, users can be reluctant to share their gaze information due to trust and privacy concerns, as gaze potentially divulges their interests.
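
    As a toy illustration of what a gaze indicator can look like on a shared display, the sketch below draws one ring per collaborator at their latest gaze position; a real system would stream these coordinates from each user's eye tracker. The tkinter rendering, colours, and indicator style are assumptions, not the study's interface.

        # Toy sketch: one gaze ring per co-located collaborator (assumed UI).
        import tkinter as tk

        USERS = {"A": "#d62728", "B": "#1f77b4"}   # one colour per partner
        R = 12                                      # indicator radius in px

        root = tk.Tk()
        canvas = tk.Canvas(root, width=800, height=600, bg="white")
        canvas.pack()
        rings = {u: canvas.create_oval(0, 0, 0, 0, outline=c, width=3)
                 for u, c in USERS.items()}

        def update_gaze(user, x, y):
            # Move the user's ring to their current gaze location.
            canvas.coords(rings[user], x - R, y - R, x + R, y + R)

        update_gaze("A", 200, 150)   # in practice: fed by the eye trackers
        update_gaze("B", 500, 400)
        root.mainloop()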

    SaccadeMachine: Software for Analyzing Saccade Tests (Anti-Saccade and Pro-saccade)

    Various types of saccadic paradigms, in particular prosaccade and antisaccade tests, are widely used in pathophysiology and psychology. Despite being widely used, there has not been a standard tool for processing and analyzing the eye tracking data obtained from saccade tests. We describe open-source software for extracting and analyzing the eye movement data of different types of saccade tests, which can be used to extract and compare participants' performance and various task-related measures across participants. We further demonstrate the utility of the software by using it to analyze the data from an antisaccade experiment and a recent distractor experiment.
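
    To illustrate the kind of task-related measures such a tool reports, the sketch below computes two standard antisaccade statistics, direction error rate and mean latency, from per-trial records. The Trial fields and the correctness rule (an antisaccade is correct when the first saccade goes away from the target) are assumptions for the example, not SaccadeMachine's actual data model.

        # Assumed sketch of basic antisaccade measures.
        from dataclasses import dataclass

        @dataclass
        class Trial:
            target_side: int          # -1 = left, +1 = right
            saccade_side: int         # side of the first saccade made
            saccade_onset_ms: float   # first saccade onset after target onset

        def antisaccade_metrics(trials):
            # An antisaccade error is a first saccade TOWARDS the target.
            errors = sum(1 for t in trials if t.saccade_side == t.target_side)
            mean_latency = sum(t.saccade_onset_ms for t in trials) / len(trials)
            return {"error_rate": errors / len(trials),
                    "mean_latency_ms": mean_latency}

        print(antisaccade_metrics([Trial(+1, -1, 240.0),   # correct
                                   Trial(-1, -1, 210.0)])) # direction error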

    EyeSeeThrough: Unifying Tool Selection and Application in Virtual Environments

    In 2D interfaces, actions are often represented by fixed tools arranged in menus, palettes, or dedicated parts of a screen, whereas 3D interfaces afford their arrangement at different depths relative to the user, who can also move them relative to each other. In this paper, we introduce EyeSeeThrough, a novel interaction technique that utilizes eye tracking in VR. The user can apply an action to an intended object by visually aligning the object with the tool along the line of sight, and then issuing a confirmation command. The underlying idea is to merge the two-step process of (1) selecting a mode in a menu and (2) applying it to a target into one unified interaction. We present a user study where we compare the method to the baseline two-step selection. The results of our user study show that our technique outperforms two-step selection in terms of speed and comfort. We further developed a prototype of a virtual living room to demonstrate the practicality of the proposed technique.
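
    At the core of such an alignment technique is a simple geometric test: the tool and the intended object lie on the same line of sight when the eye-to-tool and eye-to-object rays subtend a near-zero angle. The sketch below shows this test; the 2-degree tolerance and the vector layout are assumptions for illustration.

        # Assumed sketch of the line-of-sight alignment test.
        import numpy as np

        def aligned(eye, tool_pos, object_pos, tol_deg=2.0):
            # Angle between the eye->tool and eye->object rays; near zero
            # means the user currently sees the object through the tool.
            a = tool_pos - eye
            b = object_pos - eye
            cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) <= tol_deg

        # Example: a tool held between the eye and a lamp along one ray.
        eye = np.array([0.0, 1.6, 0.0])
        tool = np.array([0.0, 1.6, -0.5])
        lamp = np.array([0.0, 1.6, -3.0])
        print(aligned(eye, tool, lamp))   # True: confirmation applies the tool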

    PSOVIS: An interactive tool for extracting post-saccadic oscillations from eye movement data

    Post-microsaccadic eye movements recorded by high frame-rate pupil-based eye trackers reflect movements of different ocular structures, such as deformation of the iris, pupil-eyeball relative movement, and the dynamic overshoot of the eye globe at the end of each saccade. These Post-Saccadic Oscillations (PSO) exhibit a high degree of reproducibility across saccades and within participants. Therefore, in order to study the characteristics of post-saccadic eye movements, it is often desirable to extract the post-saccadic parts of the recorded saccades and to look at the ending part of all saccades. To ease the study of PSO eye movements, a simple tool for extracting PSO signals from eye movement recordings has been developed. The software application implements functions for extracting, aligning, visualising, and finally exporting the PSO signals from eye movement recordings, to be used for post-processing. The code, which is written in Python, can be downloaded from https://github.com/dmardanbeigi/PSOVIS.git
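
    A minimal sketch of the extraction-and-alignment step such a tool performs: saccade offsets are located with a simple velocity threshold, and the signal following each offset is cut out and aligned at the offset sample. The 30 deg/s threshold, 80 ms window, and detection rule are assumptions for illustration; the actual implementation is in the repository above.

        # Assumed sketch of PSO extraction from a 1-D gaze signal.
        import numpy as np

        def extract_psos(x, fs, vel_thresh=30.0, window_ms=80):
            """x: gaze position (deg); fs: sampling rate (Hz).
            Returns one aligned post-saccadic segment per row."""
            v = np.abs(np.gradient(x) * fs)        # velocity in deg/s
            above = v > vel_thresh                  # samples inside a saccade
            # Saccade offset = falling edge of an above-threshold run.
            offsets = np.flatnonzero(above[:-1] & ~above[1:]) + 1
            n = int(window_ms * fs / 1000)
            return np.array([x[i:i + n] - x[i]      # align at offset, re-zero
                             for i in offsets if i + n <= len(x)])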